25 research outputs found

    Beyond convergence rates: Exact recovery with Tikhonov regularization with sparsity constraints

    Full text link
    The Tikhonov regularization of linear ill-posed problems with an $\ell^1$ penalty is considered. We recall results for linear convergence rates and results on exact recovery of the support. Moreover, we derive conditions for exact support recovery which are especially applicable in the case of ill-posed problems, where other conditions, e.g. those based on the so-called coherence or the restricted isometry property, are usually not applicable. The obtained results also show that the regularized solutions converge not only in the $\ell^1$-norm but also in the vector space $\ell^0$ (when considered as the strict inductive limit of the spaces $\mathbb{R}^n$ as $n$ tends to infinity). Additionally, the relations between different conditions for exact support recovery and linear convergence rates are investigated. With an imaging example from digital holography we illustrate the applicability of the obtained results, i.e. that one may check a priori whether the experimental setup guarantees exact recovery with Tikhonov regularization with sparsity constraints.
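
    As a rough illustration of the variational problem discussed above (a sketch, not the paper's method), the code below applies iterative soft-thresholding to the $\ell^1$-penalized Tikhonov functional $\tfrac12\|Ax-b\|_2^2 + \alpha\|x\|_1$; the operator A, data b and weight alpha are illustrative placeholders.

```python
# Minimal sketch (not from the paper): iterative soft-thresholding (ISTA)
# for the l1-penalized Tikhonov functional  min_x 0.5*||A x - b||^2 + alpha*||x||_1.
# A, b, alpha and the iteration count are illustrative placeholders.
import numpy as np

def soft_threshold(v, t):
    """Component-wise soft-thresholding, the proximal map of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, alpha, n_iter=500):
    """Minimize 0.5*||A x - b||_2^2 + alpha*||x||_1 by forward-backward iterations."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = ||A||_2^2 is the gradient's Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)             # gradient of the smooth data-fit term
        x = soft_threshold(x - step * grad, step * alpha)
    return x
```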

    Necessary and sufficient conditions of solution uniqueness in $\ell_1$ minimization

    Full text link
    This paper shows that the solutions to various convex $\ell_1$ minimization problems are unique if and only if a common set of conditions is satisfied. This result applies broadly to the basis pursuit model, the basis pursuit denoising model, the Lasso model, as well as other $\ell_1$ models that either minimize $f(Ax-b)$ or impose the constraint $f(Ax-b) \leq \sigma$, where $f$ is a strictly convex function. For these models, this paper proves that, given a solution $x^*$ and defining $I = \operatorname{supp}(x^*)$ and $s = \operatorname{sign}(x^*_I)$, $x^*$ is the unique solution if and only if $A_I$ has full column rank and there exists $y$ such that $A_I^T y = s$ and $|a_i^T y| < 1$ for $i \notin I$. This condition was previously known to be sufficient for the basis pursuit model to have a unique solution supported on $I$. Indeed, it is also necessary, and it applies to a variety of other $\ell_1$ models. The paper also discusses ways to recognize unique solutions and verify the uniqueness conditions numerically.
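
    The stated condition lends itself to a numerical check. The sketch below is one hedged way to do it (not the paper's code): verify that $A_I$ has full column rank, compute the least-norm $y$ with $A_I^T y = s$, and test $|a_i^T y| < 1$ off the support. Since the condition only requires the existence of some such $y$, using the least-norm candidate can produce false negatives; all names are illustrative.

```python
# Sketch of a numerical check for the uniqueness condition stated above.
# It uses the least-norm solution of A_I^T y = s as a candidate dual certificate;
# the condition only asks for existence of *some* y, so this check can give
# false negatives. A and x_star are illustrative inputs.
import numpy as np

def check_uniqueness(A, x_star, tol=1e-10):
    I = np.flatnonzero(np.abs(x_star) > tol)      # support of x*
    A_I = A[:, I]
    s = np.sign(x_star[I])
    if np.linalg.matrix_rank(A_I) < len(I):       # A_I must have full column rank
        return False
    # Least-norm y with A_I^T y = s (one particular choice of dual certificate).
    y, *_ = np.linalg.lstsq(A_I.T, s, rcond=None)
    if not np.allclose(A_I.T @ y, s, atol=1e-8):
        return False
    off_support = np.setdiff1d(np.arange(A.shape[1]), I)
    return bool(np.all(np.abs(A[:, off_support].T @ y) < 1.0))
```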

    Low Complexity Regularization of Linear Inverse Problems

    Full text link
    Inverse problems and regularization theory form a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low complexity. These priors encompass, as popular examples, sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward the understanding of the theoretical properties of the so-regularized solutions. It covers a large spectrum including: (i) recovery guarantees and stability to noise, both in terms of $\ell^2$-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problems.
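
    For concreteness (an assumed illustration, not taken from the chapter), the sketch below shows proximal maps for two of the low-complexity regularizers mentioned above, group sparsity and the nuclear norm, as they would be plugged into a forward-backward splitting step of the form x <- prox_{t R}(x - t grad f(x)).

```python
# Illustrative sketch: proximal maps of two low-complexity regularizers,
# usable inside a forward-backward splitting scheme.
import numpy as np

def prox_group_l1(x, t, groups):
    """Proximal map of t * sum_g ||x_g||_2 (group sparsity): block soft-thresholding."""
    out = x.copy()
    for g in groups:                       # groups: list of index arrays
        norm_g = np.linalg.norm(x[g])
        out[g] = 0.0 if norm_g <= t else (1.0 - t / norm_g) * x[g]
    return out

def prox_nuclear(X, t):
    """Proximal map of t * ||X||_* (nuclear norm): singular-value soft-thresholding."""
    U, svals, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(svals - t, 0.0)) @ Vt
```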

    Greedy Solution of Ill-Posed Problems: Error Bounds and Exact Inversion

    Full text link
    Orthogonal matching pursuit (OMP) is an algorithm for solving sparse approximation problems. Sufficient conditions for exact recovery are known with and without noise. In this paper we investigate the applicability of OMP for the solution of ill-posed inverse problems in general, and in particular for two deconvolution examples from mass spectrometry and digital holography, respectively. In sparse approximation problems one often has to deal with the redundancy of a dictionary, i.e. the atoms are not linearly independent. However, one expects them to be approximately orthogonal, and this is quantified by the so-called incoherence. This idea cannot be transferred to ill-posed inverse problems, since here the atoms are typically far from orthogonal: the ill-posedness of the operator causes the correlation of two distinct atoms to become large, i.e. two atoms can look much alike. Therefore one needs conditions which take the structure of the problem into account and work without the concept of coherence. In this paper we develop results for exact recovery of the support of noisy signals. For the two examples from mass spectrometry and digital holography we show that our results lead to practically relevant estimates, such that one may check a priori whether the experimental setup guarantees exact deconvolution with OMP. Especially in the example from digital holography, our analysis may be regarded as a first step toward calculating the resolution power of droplet holography.
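
    For reference, here is a minimal sketch of the standard OMP iteration referred to above, assuming unit-norm atoms; the dictionary A, data b and sparsity level k are illustrative placeholders, not the paper's setup.

```python
# Minimal sketch of orthogonal matching pursuit (OMP), assuming unit-norm atoms.
# A is the dictionary (columns = atoms), b the data, k the number of iterations.
import numpy as np

def omp(A, b, k):
    residual = b.copy()
    support = []
    coef = np.zeros(0)
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit the coefficients on the selected support by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x[support] = coef
    return x, support
```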

    Matrix-free interior point method for compressed sensing problems

    Get PDF
    We consider a class of optimization problems for sparse signal reconstruction which arise in the field of Compressed Sensing (CS). A plethora of approaches and solvers exist for such problems, for example GPSR, FPC_AS, SPGL1, NESTA, $\ell_1\_\ell_s$, PDCO, to mention a few. Compressed Sensing applications lead to very well conditioned optimization problems, which can therefore be solved easily by simple first-order methods. Interior point methods (IPMs) rely on the Newton method and hence use second-order information. They have numerous advantageous features and one clear drawback: being a second-order approach, they need to solve linear equation systems, and this operation has (in the general dense case) an $O(n^3)$ computational complexity. Attempts have been made to specialize IPMs to sparse reconstruction problems, and they have led to interesting developments implemented in the $\ell_1\_\ell_s$ and PDCO software packages. We go a few steps further. First, we use the matrix-free interior point method, an approach which redesigns the IPM to avoid the need to explicitly formulate (and store) the Newton equation systems. Secondly, we exploit the special features of the signal processing matrices within the matrix-free IPM. Two such features are of particular interest: the excellent conditioning of these matrices and the ability to perform inexpensive (low-complexity) matrix-vector multiplications with them. Computational experience with large-scale one-dimensional signals confirms that the new approach is efficient and offers an attractive alternative to other state-of-the-art solvers.
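
    The key "matrix-free" ingredient is that the linear systems arising inside the IPM are solved iteratively using only matrix-vector products with the operator, never an explicit matrix. The sketch below is a generic conjugate-gradient solver of that kind, an illustrative stand-in rather than the paper's implementation; apply_M is a placeholder for the operator defining the Newton system.

```python
# Illustrative sketch of the matrix-free idea: solve M x = rhs for a symmetric
# positive definite operator M given only a function x -> M x (e.g. built from
# fast transforms), using conjugate gradients. Not the paper's solver.
import numpy as np

def conjugate_gradient(apply_M, rhs, tol=1e-8, max_iter=200):
    x = np.zeros_like(rhs, dtype=float)
    r = rhs - apply_M(x)                 # initial residual
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Mp = apply_M(p)
        alpha = rs / (p @ Mp)
        x += alpha * p
        r -= alpha * Mp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:        # converged
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```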

    A novel workflow for seismic net pay estimation with uncertainty

    No full text